# Enhanced Logical Reasoning

## Granite 4.0 Tiny Preview
- License: Apache-2.0
- Description: Granite-4.0-Tiny-Preview is a fine-grained Mixture of Experts (MoE) instruction-tuned model with 7 billion parameters, built on Granite-4.0-Tiny-Base-Preview and suited to general instruction-following tasks.
- Tags: Large Language Model, Transformers
- Organization: ibm-granite
- Downloads: 7,906
- Likes: 108
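
A minimal sketch of prompting an instruction-tuned model like this one through the Hugging Face Transformers library. The repository id `ibm-granite/granite-4.0-tiny-preview` is an assumption inferred from the organization and model name in this listing, and the prompt is illustrative only.

```python
# Minimal sketch: generate a reply with Granite 4.0 Tiny Preview via Transformers.
# The repo id below is assumed from the listing (org: ibm-granite), not confirmed here.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "ibm-granite/granite-4.0-tiny-preview"  # assumed Hugging Face repo id
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")

# Instruction-tuned chat models typically expect a chat-formatted prompt.
messages = [{"role": "user", "content": "List three everyday uses of a paper clip."}]
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

outputs = model.generate(inputs, max_new_tokens=128)
# Decode only the newly generated tokens, not the prompt.
print(tokenizer.decode(outputs[0][inputs.shape[-1]:], skip_special_tokens=True))
```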
## Llama 3.1 8B Athena Apollo Exp
- License: Apache-2.0
- Description: A powerful AI model merged with MergeKit from multiple Llama-3.1-architecture models, excelling at instruction following, role-playing, and creative writing.
- Tags: Large Language Model, Transformers, English
- Organization: ZeroXClem
- Downloads: 31
- Likes: 3
## Flan T5 Large
- License: Apache-2.0
- Description: FLAN-T5 is an instruction-fine-tuned language model based on T5. Fine-tuned on more than 1,000 tasks and covering 60+ languages, it achieves stronger performance than the original T5 at the same parameter count.
- Tags: Large Language Model, Supports Multiple Languages
- Organization: google
- Downloads: 589.25k
- Likes: 749
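
For comparison with the chat-style models above, a minimal sketch of prompting FLAN-T5 Large, a sequence-to-sequence model that takes plain-text instructions. The repository id `google/flan-t5-large` is the commonly used Hugging Face id matching the organization shown in this listing; the prompt is illustrative only.

```python
# Minimal sketch: instruction following with FLAN-T5 Large (a seq2seq model).
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

model_id = "google/flan-t5-large"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForSeq2SeqLM.from_pretrained(model_id)

# FLAN-T5 takes a plain-text instruction rather than a chat template.
prompt = "Translate English to German: How old are you?"
inputs = tokenizer(prompt, return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=32)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```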